

Search for: All records

Editors contains: "Braverman, Mark"


  1. Braverman, Mark (Ed.)
  2. Braverman, Mark (Ed.)
    Grothendieck’s inequality [Grothendieck, 1953] states that there is an absolute constant K > 1 such that for any n × n matrix A, ‖A‖_{∞→1} := max_{s,t ∈ {±1}ⁿ} ∑_{i,j} A[i,j]⋅s(i)⋅t(j) ≥ (1/K) ⋅ max_{u_i,v_j ∈ S^{n-1}} ∑_{i,j} A[i,j]⋅⟨u_i,v_j⟩. In addition to having a tremendous impact on Banach space theory, this inequality has found applications in several unrelated fields such as quantum information, regularity partitioning, and communication complexity. Let K_G (known as Grothendieck’s constant) denote the smallest such constant K. Grothendieck’s inequality implies that a natural semidefinite programming relaxation obtains a constant-factor approximation to ‖A‖_{∞→1}. The exact value of K_G remains unknown: the best lower bound (1.67…) is due to Reeds, and the best upper bound (1.78…) to Braverman, Makarychev, Makarychev and Naor [Braverman et al., 2013]. In contrast, the little Grothendieck inequality states that under the assumption that A is PSD, the constant K above can be improved to π/2, and moreover this is tight. The inapproximability of ‖A‖_{∞→1} has been studied in several papers, culminating in a tight UGC-based hardness result due to Raghavendra and Steurer (remarkably, they achieve this without knowing the value of K_G). Briët, Regev and Saket [Briët et al., 2015] proved tight NP-hardness of approximating the little Grothendieck problem within π/2, based on a framework by Guruswami, Raghavendra, Saket and Wu [Guruswami et al., 2016] for bypassing UGC for geometric problems. This also remained the best known NP-hardness for the general Grothendieck problem: the Guruswami et al. framework relies on a projection operator onto the degree-1 Fourier coefficients of long-code encodings, which naturally yields a PSD matrix A. We show how to extend this framework beyond the degree-1 Fourier coefficients, using the global structure of optimal solutions to the Grothendieck problem. As a result, we obtain a separation between the NP-hardness results for the two problems: an inapproximability factor of π/2 + ε₀ for the Grothendieck problem, for a fixed constant ε₀ > 0.
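
    As an illustrative numerical check (not from the paper), the sketch below brute-forces ‖A‖_{∞→1} over sign vectors for a small random matrix and compares it against the semidefinite relaxation; by Grothendieck’s inequality the ratio is at most K_G < 1.783. The use of numpy and cvxpy (with the SCS solver) is an assumption about the environment.

```python
# Toy check of Grothendieck's inequality on a small random matrix.
# Assumes numpy and cvxpy (with an SDP-capable solver such as SCS) are installed.
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Brute-force ||A||_{inf->1} = max over s, t in {+-1}^n of sum_{i,j} A[i,j] s_i t_j.
best = -np.inf
for s in itertools.product([-1, 1], repeat=n):
    col = np.array(s) @ A            # col[j] = sum_i A[i,j] s_i
    best = max(best, np.abs(col).sum())  # optimal t picks the sign of each col[j]

# SDP relaxation: unit vectors u_i, v_j encoded by a PSD Gram matrix X with
# unit diagonal; <u_i, v_j> sits in the off-diagonal block X[:n, n:].
X = cp.Variable((2 * n, 2 * n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, X[:n, n:]))),
                  [cp.diag(X) == 1])
sdp = prob.solve(solver=cp.SCS)

print(f"||A||_(inf->1) = {best:.4f}, SDP value = {sdp:.4f}")
print(f"ratio = {sdp / best:.4f}  (Grothendieck: at most K_G < 1.783)")
```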
  3. Braverman, Mark (Ed.)
    A longstanding open problem in coding theory is to determine the best (asymptotic) rate R₂(δ) of binary codes with minimum (relative) distance δ. An existential lower bound was given by Gilbert and Varshamov in the 1950s. On the impossibility side, in the 1970s McEliece, Rodemich, Rumsey and Welch (MRRW) proved an upper bound by analyzing Delsarte’s linear programs. To date, these results remain the best known lower and upper bounds on R₂(δ), with no improvement even for the important class of linear codes. Asymptotically, these bounds differ by an exponential factor in the blocklength. In this work, we introduce a new hierarchy of linear programs (LPs) that converges to the true size A^{Lin}₂(n,d) of an optimal linear binary code (in fact, over any finite field) of a given blocklength n and distance d. This hierarchy has several notable features: 1) it is a natural generalization of the Delsarte LPs used in the first MRRW bound; 2) it is a hierarchy of linear programs rather than semidefinite programs, potentially making it more amenable to theoretical analysis; 3) it is complete, in the sense that the optimum code size can be retrieved from level O(n²); 4) it provides an answer, in the form of a hierarchy in higher-dimensional spaces, to the question of how to cut Delsarte’s LP polytopes so as to approximate the true size of linear codes. We obtain our hierarchy by generalizing the Krawtchouk polynomials and MacWilliams inequalities to a suitable "higher-order" version taking into account interactions of 𝓁 words. Our method also generalizes to translation schemes under mild assumptions.
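
    For concreteness, here is a minimal sketch of the base level that the paper’s hierarchy generalizes: Delsarte’s LP upper bound on the size A(n,d) of a binary code, built from the Krawtchouk constraints mentioned above. The scipy dependency is an assumption; this is the classical level-1 program, not the paper’s higher-order hierarchy.

```python
# Delsarte's LP upper bound on A(n, d), the maximum size of a binary code
# of blocklength n and minimum distance d. Assumes scipy is installed.
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, x):
    """Binary Krawtchouk polynomial K_k(x) = sum_j (-1)^j C(x,j) C(n-x,k-j)."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

def delsarte_lp_bound(n, d):
    # Variables: the distance distribution a_0, ..., a_n of a putative code.
    # Maximize sum_i a_i subject to:
    #   a_0 = 1;  a_i = 0 for 0 < i < d;  a_i >= 0;
    #   sum_i a_i * K_k(i) >= 0 for every k (MacWilliams positivity).
    c = [-1.0] * (n + 1)  # linprog minimizes, so negate the objective
    A_ub = [[-krawtchouk(n, k, i) for i in range(n + 1)] for k in range(n + 1)]
    b_ub = [0.0] * (n + 1)
    A_eq = [[1.0] + [0.0] * n]  # a_0 = 1
    b_eq = [1.0]
    bounds = [(0, None)] + [(0, 0) if i < d else (0, None) for i in range(1, n + 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

# Example: the LP value printed below is a certified upper bound on A(10, 3).
print(delsarte_lp_bound(10, 3))
```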
  4. Braverman, Mark (Ed.)
    We develop approximation algorithms for set-selection problems with deterministic constraints but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well studied. On the other hand, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy. We develop new techniques for adaptive minimization and apply them to several problems of interest. The core technique we develop here is an approximate reduction from an adaptive expectation-minimization problem to a set of adaptive probability-minimization problems, which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies. We apply this method to obtain an adaptive approximation algorithm for the Min-Element problem, where the goal is to adaptively pick random variables so as to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in [Goel et al., 2010]. We further consider three extensions of the Min-Element problem, in which the objective is the sum of the k smallest element weights, or the weight of a minimum-weight basis of a given matroid, or in which the knapsack constraint is replaced by a matroid constraint. For all three variations, we develop adaptive approximation algorithms for the corresponding threshold problems and prove their near-optimality via coupling arguments.
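
    The expectation-to-threshold reduction rests on the tail-sum identity E[X] = ∑_{t≥0} P(X > t) for a nonnegative integer random variable X, which lets an expectation objective be controlled threshold by threshold. A minimal numerical illustration (a toy example with made-up distributions, not the paper’s policy):

```python
# Tail-sum identity behind the expectation-to-threshold reduction:
# for a nonnegative integer random variable X, E[X] = sum_{t>=0} P(X > t).
# Toy: X = minimum weight among a probed set of random variables.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.integers(0, 20, size=(100_000, 4)).min(axis=1)  # X = min of 4 draws

direct = samples.mean()
via_thresholds = sum((samples > t).mean() for t in range(20))

print(f"E[X] directly:       {direct:.4f}")
print(f"E[X] via thresholds: {via_thresholds:.4f}")
# Minimizing E[X] therefore reduces to (approximately) minimizing each
# threshold probability P(X > t) -- the "threshold problems" in the paper.
```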
  5. Braverman, Mark (Ed.)
    We further the study of supercritical tradeoffs in proof and circuit complexity: tradeoffs between complexity parameters in which restricting one parameter forces another to exceed its worst-case upper bound. In particular, we prove a new family of supercritical tradeoffs between depth and size for Resolution, Res(k), and Cutting Planes proofs. For each of these proof systems we construct, for each c ≤ n^{1-ε}, a formula with n^{O(c)} clauses and n variables that has a proof of size n^{O(c)}, but in which any proof of size at most roughly exponential in n^{1-ε}/c must have depth ≈ n^c. By setting c = o(n^{1-ε}) we therefore obtain exponential lower bounds on proof depth; this far exceeds the trivial worst-case upper bound of n. In doing so, we give a simplified proof of a supercritical depth/width tradeoff for tree-like Resolution from [Alexander A. Razborov, 2016]. Finally, we outline several conjectures that would imply similar supercritical tradeoffs between size and depth in circuit complexity via lifting theorems.
  6. Braverman, Mark (Ed.)
    We present a framework for speeding up the time it takes to sample from discrete distributions $\mu$ defined over subsets of size $k$ of a ground set of $n$ elements, in the regime where $k$ is much smaller than $n$. We show that if one has access to estimates of marginals $\mathbb{P}_{S\sim \mu}[i\in S]$, then the task of sampling from $\mu$ can be reduced to sampling from related distributions $\nu$ supported on size $k$ subsets of a ground set of only $n^{1-\alpha}\cdot \operatorname{poly}(k)$ elements. Here, $1/\alpha\in [1, k]$ is the parameter of entropic independence for $\mu$. Further, our algorithm only requires sparsified distributions $\nu$ that are obtained by applying a sparse (mostly $0$) external field to $\mu$, an operation that for many distributions $\mu$ of interest, retains algorithmic tractability of sampling from $\nu$. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of $\mu$, and in return reduce the amortized cost needed to produce many samples from the distribution $\mu$, as is often needed in upstream tasks such as counting and inference. For a wide range of distributions where $\alpha=\Omega(1)$, our result reduces the domain size, and as a corollary, the cost-per-sample, by a $\operatorname{poly}(n)$ factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained Strongly Rayleigh measures. Our work significantly extends the reach of prior work of Anari and Dereziński who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to $\alpha=1$). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates even for the case of log-concave polynomials; roughly speaking, we show that constant-factor approximation is enough for domain sparsification, improving over $O(1/k)$ relative error established in prior work.
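
    To make the sparsification operation concrete, here is a brute-force toy (tiny ground set, made-up product weights, and a deterministic "keep the top-marginal elements" rule rather than the paper’s algorithm): given the marginals of $\mu$, restricting $\mu$ to subsets of a small domain $T$ is exactly applying the mostly-$0$ external field $1_T$.

```python
# Toy illustration of domain sparsification via a mostly-0 external field.
# mu is a distribution over size-k subsets of [n], represented exhaustively.
# (Brute force on a tiny instance; the point is the operation, not efficiency.)
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 2
w = rng.random(n)  # element weights; mu(S) proportional to the product over S

subsets = list(itertools.combinations(range(n), k))
probs = np.array([np.prod([w[i] for i in S]) for S in subsets])
probs /= probs.sum()

# Marginals P_{S ~ mu}[i in S].
marg = np.zeros(n)
for S, p in zip(subsets, probs):
    for i in S:
        marg[i] += p

# Sparsified domain T: keep the elements with the largest marginals.
# Restricting mu to subsets of T = applying the external field 1_T.
T = set(np.argsort(marg)[-4:])
nu_idx = [j for j, S in enumerate(subsets) if set(S) <= T]
nu = probs[nu_idx] / probs[nu_idx].sum()  # the sparsified distribution

print(f"domain: {n} -> {len(T)} elements; nu supported on {len(nu)} of "
      f"{len(subsets)} subsets; T captures {probs[nu_idx].sum():.1%} of mu's mass")
```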
  7. Braverman, Mark (Ed.)
    For an abelian group H acting on the set [𝓁], an (H,𝓁)-lift of a graph G₀ is a graph obtained by replacing each vertex by 𝓁 copies, and each edge by a matching corresponding to the action of an element of H. Expander graphs obtained via abelian lifts form a key ingredient in the recent breakthrough constructions of quantum LDPC codes: (implicitly) in the fiber bundle codes by Hastings, Haah and O'Donnell [STOC 2021] achieving distance Ω̃(N^{3/5}), and in those by Panteleev and Kalachev [IEEE Trans. Inf. Theory 2021] of distance Ω(N/log(N)). However, both these constructions are non-explicit. In particular, the latter relies on a randomized construction of expander graphs via abelian lifts by Agarwal et al. [SIAM J. Discrete Math 2019]. In this work, we show the following explicit constructions of expanders obtained via abelian lifts. For every (transitive) abelian group H ⩽ Sym(𝓁), constant degree d ≥ 3 and ε > 0, we construct explicit d-regular expander graphs G obtained from an (H,𝓁)-lift of a (suitable) base n-vertex expander G₀ with the following parameters: (i) λ(G) ≤ 2√{d-1} + ε, for any lift size 𝓁 ≤ 2^{n^{δ}} where δ = δ(d,ε); (ii) λ(G) ≤ ε ⋅ d, for any lift size 𝓁 ≤ 2^{n^{δ₀}} for a fixed δ₀ > 0, when d ≥ d₀(ε); or (iii) λ(G) ≤ Õ(√d), for lift size "exactly" 𝓁 = 2^{Θ(n)}. As corollaries, we obtain explicit quantum lifted product codes of Panteleev and Kalachev of almost linear distance (and also in a wide range of parameters) and explicit classical quasi-cyclic LDPC codes with a wide range of circulant sizes. Items (i) and (ii) above are obtained by extending the techniques of Mohanty, O'Donnell and Paredes [STOC 2020] for 2-lifts to much larger abelian lift sizes (as a byproduct, simplifying their construction). This is done by providing a new encoding of special walks arising in the trace power method, carefully "compressing" depth-first search traversals. Result (iii) is via a simpler proof of Agarwal et al. [SIAM J. Discrete Math 2019] at the expense of polylog factors in the expansion.
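
    The lift operation itself is easy to state in code. The sketch below builds an (H,𝓁)-lift for the cyclic group H = Z_𝓁 acting on [𝓁] by shifts, with *random* edge labels (the paper’s contribution is precisely to choose such labels explicitly), and measures λ(G) numerically; numpy is an assumed dependency and the base graph is an arbitrary small example.

```python
# (Z_l, l)-lift of a small d-regular base graph with random shift labels.
# The paper constructs such labels explicitly; here they are random.
import itertools
import numpy as np

rng = np.random.default_rng(3)
l = 7                  # lift size; H = Z_l acting on [l] by cyclic shifts
base_n = 6             # base graph: complete graph K_6, so d = 5
edges = list(itertools.combinations(range(base_n), 2))

N = base_n * l
A = np.zeros((N, N))
for (u, v) in edges:
    s = rng.integers(l)  # group element labeling this edge
    for i in range(l):   # matching: copy i of u <-> copy (i + s) mod l of v
        a, b = u * l + i, v * l + ((i + s) % l)
        A[a, b] = A[b, a] = 1

eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
d = base_n - 1
# eig[0] = d is the trivial eigenvalue; lambda(G) is the next largest |eigenvalue|.
print(f"d = {d}, lambda(G) = {eig[1]:.3f}, "
      f"Ramanujan bound 2*sqrt(d-1) = {2 * np.sqrt(d - 1):.3f}")
```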
  8. Braverman, Mark (Ed.)
  9. Braverman, Mark (Ed.)
    We present an O((log n)²)-competitive algorithm for metrical task systems (MTS) on any n-point metric space that is also 1-competitive for service costs. This matches the competitive ratio achieved by Bubeck, Cohen, Lee, and Lee (2019) and the refined competitive ratios obtained by Coester and Lee (2019). Those algorithms work by first randomly embedding the metric space into an ultrametric and then solving MTS there. In contrast, our algorithm is cast as regularized gradient descent where the regularizer is a multiscale metric entropy defined directly on the metric space. This answers an open question of Bubeck (Highlights of Algorithms, 2019). 
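
    As background, here is a minimal, self-contained statement of the MTS cost model in code (problem definition plus a naive greedy baseline for illustration only; the paper’s mirror-descent algorithm with a multiscale metric-entropy regularizer is substantially more involved):

```python
# Metrical task systems: an online algorithm occupies a point of a metric
# space and, on each arriving cost vector c_t, may move before paying the
# service cost at its new point. Total cost = movement + service.
import numpy as np

def mts_cost(dist, costs, policy):
    """dist: n x n metric; costs: T x n service-cost vectors;
    policy(state, c) -> new state. Returns (movement, service) totals."""
    state, move, serve = 0, 0.0, 0.0
    for c in costs:
        nxt = policy(state, c)
        move += dist[state, nxt]
        serve += c[nxt]
        state = nxt
    return move, serve

# Toy instance: 4 points on a line; greedy policy minimizing move + service.
dist = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)
rng = np.random.default_rng(4)
costs = rng.random((50, 4))

greedy = lambda s, c: int(np.argmin(dist[s] + c))
move, serve = mts_cost(dist, costs, greedy)
print(f"movement = {move:.2f}, service = {serve:.2f}")
# The paper's algorithm pays service within 1x and movement within
# O((log n)^2) of the optimal offline policy; greedy has no such guarantee.
```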
  10. Braverman, Mark (Ed.)
    We introduce a hitting set generator for Polynomial Identity Testing based on evaluations of low-degree univariate rational functions at abscissas associated with the variables. In spite of the univariate nature, we establish an equivalence up to rescaling with a generator introduced by Shpilka and Volkovich, which has a similar structure but uses multivariate polynomials in the abscissas. We study the power of the generator by characterizing its vanishing ideal, i.e., the set of polynomials that it fails to hit. Capitalizing on the univariate nature, we develop a small collection of polynomials that jointly generate the vanishing ideal. As corollaries, we obtain tight bounds on the minimum degree, sparseness, and partition size of set-multilinearity in the vanishing ideal. Inspired by an alternating-algebra representation, we develop a structured deterministic membership test for the vanishing ideal. As a proof of concept, we rederive known derandomization results based on the generator by Shpilka and Volkovich, and present a new application for read-once oblivious arithmetic branching programs that provably transcends the usual combinatorial techniques.
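
    A sketch of the Shpilka–Volkovich-style substitution discussed above, in its standard form (each variable x_i is replaced by a Lagrange-interpolated combination of 2k fresh variables); sympy is an assumed dependency and the test polynomials are arbitrary examples, not from the paper.

```python
# Shpilka-Volkovich generator: x_i -> sum_{j<=k} L_i(y_j) * z_j, where L_i is
# the Lagrange basis polynomial through distinct abscissas a_1, ..., a_n.
# A polynomial "hit" by the generator stays nonzero after substitution.
import sympy as sp

def sv_substitution(n, k):
    """Return the x-variables and the map x_i -> G_i(y_1..y_k, z_1..z_k)."""
    a = list(range(1, n + 1))  # distinct abscissas, one per variable
    x = sp.symbols(f"x1:{n + 1}")
    y = sp.symbols(f"y1:{k + 1}")
    z = sp.symbols(f"z1:{k + 1}")
    def lagrange(i, t):        # L_i(t): equals 1 at a_i and 0 at every other a_j
        num = sp.Mul(*[t - a[j] for j in range(n) if j != i])
        den = sp.Mul(*[sp.Integer(a[i] - a[j]) for j in range(n) if j != i])
        return num / den
    subs = {x[i]: sum(lagrange(i, y[j]) * z[j] for j in range(k))
            for i in range(n)}
    return x, subs

x, subs = sv_substitution(n=4, k=2)
for f in [x[0] * x[1] - x[2], (x[0] + x[1]) ** 2]:
    image = sp.expand(f.subs(subs))
    print(f"{f}  ->  {'zero' if image == 0 else 'nonzero'} after substitution")
```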